Docker

Docker Images

  • A Docker image is a lightweight, standalone, and executable package that contains everything needed to run an application, including the code, runtime, libraries, and system tools. Docker images are built from a Dockerfile, which is a script that specifies the application's dependencies, configuration, and environment.

  • Docker images are designed to run in a containerized environment, which provides isolation and resource management for the application. This means that the same Docker image can run consistently across different environments, such as development, testing, and production. Docker images are stored in a Docker registry, such as Docker Hub, and can be easily distributed and shared among teams and organizations (a minimal example follows the analogy below).

  • A good analogy from ChatGPT:

    Imagine that you want to bake a cake. The recipe for the cake is like a Dockerfile, which specifies the ingredients, quantities, and steps needed to make the cake. You follow the recipe to bake the cake, and once it's done, you have a finished product, which is like a Docker image.

    Now, let's say you want to transport the cake to a party. You don't want the cake to get squished or ruined during transport, so you put it in a cake container that protects it and keeps it fresh. This cake container is like a Docker container, which is a runtime instance of a Docker image.

    Just as you can bake many cakes using the same recipe, you can build many Docker images using the same Dockerfile. And just as you can transport many cakes in different containers, you can run many Docker containers using the same Docker image.
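
As a concrete illustration of the recipe/cake relationship, here is a minimal sketch of a Dockerfile for a Node.js app (the file names, commands, and image tag are assumptions for the example, not from a specific project):

# Recipe: a minimal Dockerfile for a hypothetical Node.js app
FROM node:lts
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
CMD ["node", "index.js"]

Baking and serving then map onto the usual commands:

docker build -t my-app .    # follow the recipe (Dockerfile) to bake the cake (image)
docker run my-app           # put the cake in a container and serve it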

Docker Volumes

  • Docker creates a separate environment, so the file system inside Docker is different from the file system of the environment that runs Docker (in our case, our local laptop)
  • We need a way to let the application running inside Docker access the host environment's file system and vice versa ⇒ Volume
  • Use a volume so that if the application inside Docker generates or deletes any files, or if we create or delete any file on the host, the change is reflected in both environments
  • Normally we would have to run docker build and docker run after every change we make to the code, which is inefficient and can take a large amount of time
  • We can mount a specific code folder on our local laptop to the code folder inside the container, so that every change we make locally is applied instantly to the mounted folder inside the container ⇒ we see the change immediately without having to rebuild the image, which would otherwise create lots of unnecessary image versions (see the bind-mount sketch below)
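
A minimal sketch of such a bind mount, assuming a hypothetical image my-app whose code lives in /app/src:

# Mount the local src folder onto /app/src inside the container,
# so edits on the laptop appear in the container instantly
docker run -v "$(pwd)/src:/app/src" my-app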

Docker Multistage Builds

  • Multistage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain
  • It was very common to have one Dockerfile to use for development and a slimmed-down one to use for production, which only contained your application and exactly what was needed to run it ⇒ Maintaining two Dockerfiles is not ideal
  • With multistage builds, you use multiple FROM statements in your Dockerfile. Each FROM can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving everything you don't want in the final image, as in the example below.
# Stage 1 (dev): install all dependencies and generate the Prisma client
FROM node:lts AS dev
WORKDIR /app
COPY package.json yarn.lock* ./
COPY prisma ./prisma
RUN apt-get update && apt-get install -y openssl
RUN yarn install --frozen-lockfile && yarn generate:prisma
COPY . .

# Stage 2 (builder): compile the app, then reinstall production dependencies only
FROM node:lts AS builder
WORKDIR /app
COPY --from=dev /app ./
RUN yarn build && yarn install --frozen-lockfile --production

# Stage 3 (prod): ship only the built output and production dependencies
FROM node:lts AS prod
WORKDIR /app
COPY --from=builder /app ./
CMD ["node", "dist/index.js"]

Expose Port vs Mapping Port
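
  • EXPOSE in a Dockerfile is documentation/metadata only: it records which port the application inside the container listens on, but it does not make that port reachable from the host
  • Mapping (publishing) a port with -p host:container at docker run time is what actually forwards traffic from the host into the container

A minimal sketch of the difference (port numbers and image name are assumptions for illustration):

# In the Dockerfile: EXPOSE only documents the port
EXPOSE 3000

# At run time: -p actually publishes it (host port 8080 -> container port 3000)
docker run -p 8080:3000 my-app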

Docker Layer Caching

  • Each instruction in the Dockerfile creates a layer during docker build
  • Docker automatically caches these build layers
  • Every time we run docker build, Docker tries to reuse layers from the cache; if any layer changes, that layer and every layer below it are invalidated and have to be rebuilt

    Put less frequently changed, heavy layers at the top (the dependency-installation command, for example) and more frequently changed, lightweight layers at the bottom, as in the sketch below
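
A minimal sketch of this ordering for a Node.js app (file names are assumptions for illustration): the dependency manifest is copied and installed first, so editing application code only invalidates the final COPY layer instead of forcing a full reinstall.

# Heavy, rarely-changing layers first: dependency manifest + install
FROM node:lts
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# Light, frequently-changing layer last: the application code
COPY . .
CMD ["node", "index.js"]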

Docker Compose

  • Docker Compose is a tool that allows you to define and run multi-container Docker applications. With Docker Compose, you can define the configuration of all the containers that make up your application in a single YAML file, called a Compose file.
  • The Compose file allows you to specify the images, environment variables, network settings, and other configuration details for each container in your application. Docker Compose then takes care of creating and starting all the containers, as well as managing the network connections between them.
  • Docker Compose is particularly useful for developing and testing complex applications that require multiple containers. It allows developers to quickly spin up a local environment that closely resembles the production environment, making it easier to test and debug their code. Additionally, Docker Compose makes it easy to scale your application up or down by adding or removing containers as needed.
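
A minimal sketch of a Compose file for a web app plus a Postgres database (service names, ports, image tag, and credentials are all assumptions for illustration):

# docker-compose.yml (hypothetical two-service setup)
services:
  web:
    build: .                      # build the app image from the local Dockerfile
    ports:
      - "8080:3000"               # map host port 8080 to container port 3000
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db                        # start the database before the web service
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data   # persist data across restarts

volumes:
  db-data:

Running docker compose up then builds and starts both containers on a shared network, and docker compose down tears them down again.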